
fields (version 8.3-6)

fields testing scripts: Testing fields functions

Description

Some of the basic methods in fields can be tested by directly implementing the linear algebra with matrix expressions, and other functions can be cross-checked against each other within fields. These comparisons are done in the R source code test files in the tests subdirectory of fields. The function test.for.zero is useful for making these comparisons in a meaningful and documented way.
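As an illustration of this kind of cross-check, the sketch below compares fields::rdist against a direct computation of the same pairwise Euclidean distances with base R's dist. The choice of rdist here is only for illustration and is not taken from the shipped test scripts.

library(fields)

# Cross-check fields::rdist against a direct computation of the same
# pairwise Euclidean distances using base R's dist (illustrative only).
set.seed(123)
x <- matrix(runif(20), ncol = 2)

D.fields <- rdist(x)             # fields implementation
D.direct <- as.matrix(dist(x))   # direct computation

test.for.zero(D.fields, D.direct, tag = "rdist vs dist")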

Usage

test.for.zero( xtest, xtrue,  tol= 1.0e-8, relative=TRUE, tag=NULL)

Arguments

xtest      Object (vector, matrix, or other structure) of values to test.
xtrue      Object of the same form holding the values taken as true.
tol        Tolerance used to judge whether the test passes (default 1.0e-8).
relative   If TRUE the comparison is relative: the summed absolute differences are divided by mean( abs(c(xtrue))).
tag        Optional text string printed with the result to identify the test.

Details

IMPORTANT: If the R object test.for.zero.flag exists with any value (e.g. test.for.zero.flag <- 1), then when a test fails this function will generate an error in addition to printing a message. This option is provided to ensure that a test script stops with an error when any individual test fails.
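A minimal sketch of how a test script might use this flag is given below; the comparisons are placeholders rather than tests taken from the actual fields test files.

library(fields)

# Setting the flag makes any failed comparison stop the script with an error,
# so an automated run of the script cannot silently ignore a failure.
test.for.zero.flag <- 1

# Placeholder comparisons; the real scripts compare fields results against
# direct matrix-algebra implementations.
test.for.zero(sum(1:10), 55, tag = "sum check")
test.for.zero(solve(diag(3)), diag(3), tag = "identity inverse")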

An example:

> test.for.zero( 1:10, 1:10 + 1e-10, tag="First test")
Testing:  First test
PASSED test at tolerance 1e-08

> test.for.zero( 1:10, 1:10 + 1e-10, tag="First test", tol=1e-12)
Testing:  First test
FAILED test value = 1.818182e-10 at tolerance 1e-12

> test.for.zero.flag <- 1
> test.for.zero( 1:10, 1:10 + 1e-10, tag="First test", tol=1e-12)
Testing:  First test
FAILED test value = 1.818182e-10 at tolerance 1e-12
Error in test.for.zero(1:10, 1:10 + 1e-10, tag = "First test", tol = 1e-12) :

Several test scripts are included in the tests subdirectory of the fields source package.

To run a test, attach the fields package and source the corresponding testing file from the tests subdirectory of the fields source code, then compare the output to the matching "XXX.Rout.save" text file.
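One way to do this from within R is sketched below; the file names are placeholders rather than the names of actual fields test scripts, and tools::Rdiff is used here as one possible way to compare the captured output against the saved reference.

library(fields)

# Run one test script (path and file name are placeholders) and capture
# its commands and output to a .Rout file.
sink("mytest.Rout")
source("tests/mytest.R", echo = TRUE)
sink()

# Compare the captured output with the saved reference output; Rdiff
# reports where the two output files differ.
tools::Rdiff("mytest.Rout", "tests/mytest.Rout.save")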

test.for.zero prints a result for each individual comparison. Failed tests indicate a potential problem and are reported with a message beginning

"FAILED test value = ... "

If the object test.for.zero.flag exists then an error is also generated when the test fails.

FORM OF COMPARISON: The actual test value computed is the sum of absolute differences:

test value = sum( abs(c(xtest) - c( xtrue) ) ) /denom

where denom is mean( abs(c(xtrue))) when relative=TRUE and 1.0 otherwise.

Note the use of "c" here to stack any structure in xtest and xtrue into a vector.
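As a check on this formula, the test value from the failed example above can be reproduced directly in base R (a minimal sketch, not part of the fields test scripts):

# Reproduce the comparison made by test.for.zero "by hand".
xtest <- 1:10
xtrue <- 1:10 + 1e-10

denom      <- mean(abs(c(xtrue)))                    # relative = TRUE
test.value <- sum(abs(c(xtest) - c(xtrue))) / denom

test.value          # approximately 1.818182e-10, as reported above
test.value < 1e-8   # TRUE:  passes at the default tolerance
test.value < 1e-12  # FALSE: fails at tol = 1e-12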